Appendix
For both the RBF and the Matérn-3/2 kernels, we consider three possible ranges of lengthscales: [0.07, 0.13], [0.17, 0.23], and [0.27, 0.33]. For both training and the fast adaptation during testing, we apply 5-shot adaptation (i.e., 5 black-box functions are used for adaptation) and set the number of few-shot gradient updates to 5. Specifically, the amount of translation added to each dimension of x is selected uniformly from the range [-0.1·xlim, 0.1·xlim]. To address the continuous input domains and achieve a fair comparison between FSAF and MetaBO, we leverage a hierarchical gridding method similar to that in [27] for the maximization procedure of the AFs. The validation set is used both for the few-shot adaptation of FSAF and for finding the lengthscale parameter of the GP surrogate model for posterior inference.
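A hierarchical gridding scheme for maximizing an acquisition function over a continuous box can be sketched as follows. This is a minimal illustration: the grid resolution, the number of refinement levels, and the test function below are assumptions for demonstration, not the settings used in [27].

```python
import numpy as np

def hierarchical_grid_max(f, lo, hi, points_per_dim=11, levels=3):
    """Maximize f over the box [lo, hi] by repeatedly gridding and zooming.

    At each level, evaluate f on a regular grid, keep the best point, and
    refine the search box around it. Returns the best input and value found.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_val = None, -np.inf
    for _ in range(levels):
        axes = [np.linspace(l, h, points_per_dim) for l, h in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, lo.size)
        vals = np.array([f(x) for x in grid])
        i = int(np.argmax(vals))
        if vals[i] > best_val:
            best_x, best_val = grid[i], vals[i]
        # shrink the box to one grid cell around the current maximizer
        step = (hi - lo) / (points_per_dim - 1)
        lo = np.maximum(lo, best_x - step)
        hi = np.minimum(hi, best_x + step)
    return best_x, best_val

# Example: maximize a smooth acquisition-like surrogate on [0, 1]^2.
x_star, v_star = hierarchical_grid_max(
    lambda x: -np.sum((x - 0.37) ** 2), lo=[0.0, 0.0], hi=[1.0, 1.0]
)
```

Each refinement multiplies the effective resolution per dimension, so three levels of an 11-point grid resolve the maximizer to roughly (1/10)^3 of the box width per axis while evaluating only 3 × 11^d points.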
Stagnation in Evolutionary Algorithms: Convergence $\neq$ Optimality
Stagnation refers to the situation where the best solution found so far remains unchanged over time, a common phenomenon in evolutionary computation since most evolutionary algorithms are stochastic [1, 2]. When stagnation occurs, it is often blamed on bad luck, under the assumption that the evolutionary algorithm has become stuck in a local minimum. As a result, significant effort has been devoted to designing strategies that help existing algorithms escape such traps, or to conducting stability analyses of evolutionary algorithms to ensure convergence. This leads to the proposition that stagnation impedes convergence, and that convergence inherently signifies optimality. However, after a thorough analysis of stagnation, convergence and optimality in this study, it is found that this perspective is misleading. The main contributions of this study can be summarized as follows: 1. This study is the first to highlight that the stagnation of an individual can actually facilitate the convergence of the entire population.
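The convergence-without-optimality distinction can be seen in a toy run. The deliberately exploitation-only population below is a hypothetical construction for illustration, not an algorithm from the paper: the best-so-far value stagnates from the very first generation, yet the population still converges, to a point that is not the optimum.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))   # global minimum 0 at the origin

# Purely exploitative population: every individual moves halfway toward the
# incumbent best each generation. The best-so-far never improves
# (stagnation), yet the population spread shrinks geometrically
# (convergence) -- onto a non-optimal point.
pop = np.array([[1.0, 1.0], [2.0, 3.0], [4.0, 1.5], [3.0, 5.0], [1.5, 2.0]])
best = min(pop, key=sphere)                      # (1, 1), f = 2
best_history, spread_history = [], []
for gen in range(40):
    pop = pop + 0.5 * (best - pop)               # contract toward the best
    cand = min(pop, key=sphere)
    if sphere(cand) < sphere(best):              # never fires here
        best = cand
    best_history.append(sphere(best))
    spread_history.append(float(np.mean(np.linalg.norm(pop - best, axis=1))))
```

Here the best value stays at 2 throughout while the mean distance to the incumbent halves every generation: the run converges, but to f = 2 rather than the sphere optimum f = 0.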
Parameter Tuning of the Firefly Algorithm by Three Tuning Methods: Standard Monte Carlo, Quasi-Monte Carlo and Latin Hypercube Sampling Methods
Joy, Geethu, Huyck, Christian, Yang, Xin-She
There are many different nature-inspired algorithms in the literature, and almost all such algorithms have algorithm-dependent parameters that need to be tuned. The proper setting and parameter tuning should be carried out to maximize the performance of the algorithm under consideration. This work is the extension of the recent work on parameter tuning by Joy et al. (2024) presented at the International Conference on Computational Science (ICCS 2024), and the Firefly Algorithm (FA) is tuned using three different methods: the Monte Carlo method, the Quasi-Monte Carlo method and the Latin Hypercube Sampling. The FA with the tuned parameters is then used to solve a set of six different optimization problems, and the possible effect of parameter setting on the quality of the optimal solutions is analyzed. Rigorous statistical hypothesis tests have been carried out, including Student's t-tests, F-tests, non-parametric Friedman tests and ANOVA. Results show that the performance of the FA is not influenced by the tuning methods used. In addition, the tuned parameter values are largely independent of the tuning methods used. This indicates that the FA can be flexible and equally effective in solving optimization problems, and any of the three tuning methods can be used to tune its parameters effectively.
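The three tuning methods differ only in how candidate parameter settings fill the search space. A minimal sketch, assuming three FA parameters (alpha, beta0, gamma) with illustrative ranges (the paper's actual ranges may differ), using a hand-rolled Halton sequence for quasi-Monte Carlo and a hand-rolled Latin hypercube:

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n, dims, bases=(2, 3, 5)):
    """First n points of a Halton sequence (a simple quasi-Monte Carlo set)."""
    return np.array([[radical_inverse(i + 1, bases[d]) for d in range(dims)]
                     for i in range(n)])

def latin_hypercube(n, dims, rng):
    """One point per axis-aligned stratum in every dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (dims, 1)), axis=1).T
    return (strata + rng.random((n, dims))) / n

rng = np.random.default_rng(42)
n = 16
# Hypothetical tuning ranges for (alpha, beta0, gamma); purely illustrative.
lo = np.array([0.0, 0.1, 0.01])
hi = np.array([1.0, 1.0, 10.0])

samples = {
    "monte_carlo": rng.random((n, 3)),          # plain i.i.d. uniform
    "quasi_monte_carlo": halton(n, 3),          # low-discrepancy sequence
    "latin_hypercube": latin_hypercube(n, 3, rng),
}
# map unit-cube samples to the parameter box
param_sets = {k: lo + v * (hi - lo) for k, v in samples.items()}
```

Each method yields n candidate parameter vectors; the FA is then run with each candidate and the best-performing setting is kept. The paper's finding, that results are largely insensitive to which sampler generated the candidates, is consistent with all three producing reasonable space-filling designs at this sample size.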
Coherence-based Approximate Derivatives via Web of Affine Spaces Optimization
Rakita, Daniel, Liang, Chen, Wang, Qian
Computing derivatives is a crucial subroutine in computer science and related fields as it provides a local characterization of a function's steepest directions of ascent or descent. In this work, we recognize that derivatives are often not computed in isolation; conversely, it is quite common to compute a \textit{sequence} of derivatives, each one somewhat related to the last. Thus, we propose accelerating derivative computation by reusing information from previous, related calculations-a general strategy known as \textit{coherence}. We introduce the first instantiation of this strategy through a novel approach called the Web of Affine Spaces (WASP) Optimization. This approach provides an accurate approximation of a function's derivative object (i.e. gradient, Jacobian matrix, etc.) at the current input within a sequence. Each derivative within the sequence only requires a small number of forward passes through the function (typically two), regardless of the number of function inputs and outputs. We demonstrate the efficacy of our approach through several numerical experiments, comparing it with alternative derivative computation methods on benchmark functions. We show that our method significantly improves the performance of derivative computation on small to medium-sized functions, i.e., functions with approximately fewer than 500 combined inputs and outputs. Furthermore, we show that this method can be effectively applied in a robotics optimization context. We conclude with a discussion of the limitations and implications of our work. Open-source code, visual explanations, and videos are located at the paper website: \href{https://apollo-lab-yale.github.io/25-RSS-WASP-website/}{https://apollo-lab-yale.github.io/25-RSS-WASP-website/}.
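The coherence idea, reusing information from the previous derivative estimate instead of recomputing from scratch, can be illustrated with a classical rank-1 (Broyden) secant update. This is a stand-in for intuition only, not the paper's WASP construction: like WASP, it maintains an approximate Jacobian across a sequence of nearby inputs at the cost of one extra function evaluation per step.

```python
import numpy as np

def f(x):
    """Example vector-valued function R^2 -> R^2."""
    return np.array([np.sin(x[0]) * x[1], x[0] ** 2 + np.cos(x[1])])

def broyden_update(J, x_prev, x_new, f_prev, f_new):
    """Rank-1 ('good Broyden') update of an approximate Jacobian.

    The new estimate satisfies the secant condition J_new @ s = y for the
    latest step s = x_new - x_prev, while reusing J everywhere else.
    """
    s = x_new - x_prev
    y = f_new - f_prev
    return J + np.outer(y - J @ s, s) / (s @ s)

# Walk along a slowly varying input sequence, maintaining a Jacobian
# estimate with one evaluation per step instead of a full finite-difference
# stencil at every point.
xs = [np.array([0.5 + 0.01 * k, 1.0 + 0.005 * k]) for k in range(30)]
eps = 1e-6
# one-time finite-difference Jacobian at the first input
J = np.column_stack([(f(xs[0] + eps * e) - f(xs[0])) / eps for e in np.eye(2)])
fx = f(xs[0])
for x_prev, x_new in zip(xs, xs[1:]):
    f_new = f(x_new)
    J = broyden_update(J, x_prev, x_new, fx, f_new)
    fx = f_new
```

The per-step cost is independent of the number of inputs, which is the same economics the abstract describes (a small, dimension-independent number of forward passes per derivative in a sequence); WASP's affine-space formulation differs in how the reused information is combined.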
Intuitive Analysis of the Quantization-based Optimization: From Stochastic and Quantum Mechanical Perspective
In this paper, we present an intuitive analysis of the optimization technique based on the quantization of an objective function. Quantization of an objective function is an effective optimization methodology that decreases the measure of a level set containing several saddle points and local minima and finds the optimal point at the limit level set. To investigate the dynamics of quantization-based optimization, we derive an overdamped Langevin dynamics model from an intuitive analysis to minimize the level set by iterative quantization. We claim that quantization-based optimization involves the quantities of thermodynamical and quantum mechanical optimization as the core methodologies of global optimization. Furthermore, on the basis of the proposed SDE, we provide thermodynamic and quantum mechanical analysis with Witten-Laplacian. The simulation results with the benchmark functions, which compare the performance of the nonlinear optimization, demonstrate the validity of the quantization-based optimization.
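The core mechanism, flattening sub-level structure by quantizing the objective and then refining the quantization, can be sketched as a toy accept/reject walk. This is an illustrative scheme under assumed settings (1-D Rastrigin test function, halving schedule for the level-set width h), not the paper's Langevin-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin1d(x):
    return x * x + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def quantized(f, x, h):
    """Project the objective onto level sets of width h."""
    return h * np.floor(f(x) / h)

# Start far from the global minimum (x = 0) and anneal the level-set width.
x = 3.0
best_x, best_f = x, rastrigin1d(x)
h = 32.0
for _ in range(8):                     # halve the quantization step 8 times
    for _ in range(400):               # random-walk moves at this resolution
        cand = x + rng.normal(0.0, 0.5)
        # accept whenever the *quantized* objective does not increase, so
        # local barriers inside one level set no longer block the walk
        if quantized(rastrigin1d, cand, h) <= quantized(rastrigin1d, x, h):
            x = cand
            if rastrigin1d(x) < best_f:
                best_x, best_f = x, rastrigin1d(x)
    h *= 0.5
```

With a coarse h, the sinusoidal barriers of the Rastrigin function all fall inside one level set and the walk diffuses freely across local minima; as h shrinks, the surviving level set narrows around increasingly good regions, which is the "minimize the level set by iterative quantization" picture in the abstract.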
Graph Neural Networks Are Evolutionary Algorithms
In this paper, we reveal the intrinsic duality between graph neural networks (GNNs) and evolutionary algorithms (EAs), bridging two traditionally distinct fields. Building on this insight, we propose Graph Neural Evolution (GNE), a novel evolutionary algorithm that models individuals as nodes in a graph and leverages designed frequency-domain filters to balance global exploration and local exploitation. Through the use of these filters, GNE aggregates high-frequency (diversity-enhancing) and low-frequency (stability-promoting) information, transforming EAs into interpretable and tunable mechanisms in the frequency domain. Extensive experiments on benchmark functions demonstrate that GNE consistently outperforms state-of-the-art algorithms such as GA, DE, CMA-ES, SDAES, and RL-SHADE, excelling in complex landscapes, optimal solution shifts, and noisy environments. Its robustness, adaptability, and superior convergence highlight its practical and theoretical value. Beyond optimization, GNE establishes a conceptual and mathematical foundation linking EAs and GNNs, offering new perspectives for both fields. Its framework encourages the development of task-adaptive filters and hybrid approaches for EAs, while its insights can inspire advances in GNNs, such as improved global information propagation and mitigation of oversmoothing. GNE's versatility extends to solving challenges in machine learning, including hyperparameter tuning and neural architecture search, as well as real-world applications in engineering and operations research. By uniting the dynamics of EAs with the structural insights of GNNs, this work provides a foundation for interdisciplinary innovation, paving the way for scalable and interpretable solutions to complex optimization problems.
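The frequency-domain reading of an EA can be made concrete with a toy population on a complete graph: a fitness-weighted average of the nodes is the low-frequency, stability-promoting signal, and each node's deviation from it is the high-frequency, diversity-enhancing signal. This is a minimal illustration of that decomposition, not the GNE algorithm; the filter weights and noise schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(X):
    return np.sum(X ** 2, axis=1)    # row-wise fitness, minimum 0 at origin

n, d = 30, 5
X = rng.normal(0.0, 3.0, size=(n, d))          # population = graph nodes
init_best = float(sphere(X).min())
best_f = init_best
for gen in range(200):
    fit = sphere(X)
    w = np.exp(-(fit - fit.min()))             # better fitness -> larger weight
    w /= w.sum()
    low = w @ X                                 # low-frequency consensus point
    high = X - low                              # high-frequency deviations
    # low-pass term promotes stability, attenuated high-pass term keeps
    # diversity, decaying noise supplies exploration
    X = low + 0.5 * high + rng.normal(0.0, 0.05, size=(n, d)) * 0.99 ** gen
    best_f = min(best_f, float(sphere(X).min()))
```

Scaling the `high` term up or down is exactly the tunable exploration/exploitation trade-off the abstract attributes to frequency-domain filters: with the high-pass gain near 1 the population stays spread out, and with it near 0 the population collapses onto the consensus point.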